Uncovering Issues in the Radio Access Network by Looking at the Neighbors

Suárez-Varela, José, Lutu, Andra

arXiv.org Artificial Intelligence

Mobile network operators (MNOs) manage Radio Access Networks (RANs) with massive amounts of cells over multiple radio generations (2G-5G). To handle such complexity, operations teams rely on monitoring systems, including anomaly detection tools that identify unexpected behaviors. In this paper, we present c-ANEMON, a Contextual ANomaly dEtection MONitor for the RAN based on Graph Neural Networks (GNNs). Our solution captures spatio-temporal variations by analyzing the behavior of individual cells in relation to their local neighborhoods, enabling the detection of anomalies that are independent of external mobility factors. This, in turn, allows focusing on anomalies associated with network issues (e.g., misconfigurations, equipment failures). We evaluate c-ANEMON using real-world data from a large European metropolitan area (7,890 cells; 3 months). First, we show that the GNN model within our solution generalizes effectively to cells from previously unseen areas, suggesting the possibility of using a single model across extensive deployment regions. Then, we analyze the anomalies detected by c-ANEMON through manual inspection and define several categories of long-lasting anomalies (6+ hours). Notably, 45.95% of these anomalies fall into a category that is more likely to require intervention by operations teams.
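The contextual idea the abstract describes, judging a cell against its local neighborhood so that shared mobility effects cancel out, can be illustrated with a minimal neighbor-deviation score. This is a plain z-score stand-in, not the paper's GNN; the dict-based KPI and adjacency inputs are illustrative assumptions.

```python
def neighbor_anomaly_scores(kpi, adjacency, eps=1e-9):
    """Score each cell by how far its KPI deviates from the mean of its
    neighbors, in units of the neighbors' standard deviation.

    kpi:       {cell_id: kpi_value}
    adjacency: {cell_id: [neighbor_ids]}  (the local neighborhood graph)
    A high score means the cell behaves unlike its surroundings, which is
    the kind of contextual anomaly a neighborhood-aware monitor targets.
    """
    scores = {}
    for cell, neighbors in adjacency.items():
        vals = [kpi[n] for n in neighbors]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        # eps avoids division by zero when all neighbors agree exactly
        scores[cell] = abs(kpi[cell] - mu) / ((var ** 0.5) + eps)
    return scores
```

Because every cell is scored relative to its own neighborhood, a city-wide traffic surge (which moves all neighbors together) raises no alarm, while a single misbehaving cell stands out.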


A Cost-Effective Approach to Smooth A* Path Planning for Autonomous Vehicles

Schichler, Lukas, Festl, Karin, Solmaz, Selim, Watzenig, Daniel

arXiv.org Artificial Intelligence

Path planning for wheeled mobile robots is a critical component in the field of automation and intelligent transportation systems. Car-like vehicles, which have non-holonomic constraints on their movement capability, impose additional requirements on the planned paths. Traditional path planning algorithms, such as A*, are widely used due to their simplicity and effectiveness in finding optimal paths in complex environments. However, these algorithms often do not consider vehicle dynamics, resulting in paths that are infeasible or impractical for actual driving. Specifically, a path that minimizes the number of grid cells may still be too curvy or sharp for a car-like vehicle to navigate smoothly. This paper addresses the need for a path planning solution that not only finds a feasible path but also ensures that the path is smooth and drivable. By adapting the A* algorithm for a curvature constraint and incorporating a cost function that considers the smoothness of possible paths, we aim to bridge the gap between grid-based path planning and smooth paths that are drivable by car-like vehicles. The proposed method leverages motion primitives, pre-computed using a ribbon-based path planner that produces smooth paths of minimum curvature. The motion primitives guide the A* algorithm in finding paths of minimal length and curvature. With the proposed modification to the A* algorithm, the planned paths can be constrained to have a minimum turning radius much larger than the grid size. We demonstrate the effectiveness of the proposed algorithm in different unstructured environments. In a two-stage planning approach, the modified A* algorithm first finds a grid-based path, and the ribbon-based path planner then creates a smooth path within the area of grid cells. The resulting paths are smooth with small curvatures, independent of the orientation of the grid axes and even in the presence of sharp obstacles.
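The core idea of biasing A* toward smooth paths can be sketched with a toy grid A* that adds a heading-change penalty. This is a crude stand-in for the curvature cost: the hypothetical `turn_penalty` knob and 8-connected moves are my simplification, not the paper's ribbon-based motion primitives.

```python
import heapq
import itertools

def a_star_smooth(grid, start, goal, turn_penalty=0.5):
    """A* on an occupancy grid (1 = obstacle) whose state includes the
    heading of the last move, so changing direction incurs an extra cost.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]

    def h(r, c):  # octile-distance heuristic, admissible on this grid
        dr, dc = abs(r - goal[0]), abs(c - goal[1])
        return max(dr, dc) + (2 ** 0.5 - 1) * min(dr, dc)

    tie = itertools.count()  # tiebreaker so the heap never compares paths
    frontier = [(h(*start), next(tie), 0.0, (start[0], start[1], None), [start])]
    seen = set()
    while frontier:
        _, _, g, (r, c, head), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        if (r, c, head) in seen:
            continue
        seen.add((r, c, head))
        for i, (dr, dc) in enumerate(moves):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                continue
            step = (dr * dr + dc * dc) ** 0.5
            # penalise heading changes as a rough proxy for curvature
            turn = turn_penalty if (head is not None and i != head) else 0.0
            ng = g + step + turn
            heapq.heappush(frontier,
                           (ng + h(nr, nc), next(tie), ng, (nr, nc, i), path + [(nr, nc)]))
    return None
```

Raising `turn_penalty` trades path length for fewer heading changes, loosely mirroring how a curvature-aware cost steers A* toward paths a car-like vehicle can follow; the paper's actual method constrains turning radius via pre-computed smooth primitives.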


Complex picking via entanglement of granular mechanical metamaterials

Rezanejad, Ashkan, Mousa, Mostafa, Howard, Matthew, Forte, Antonio Elia

arXiv.org Artificial Intelligence

When objects are packed in a cluster, physical interactions are unavoidable. Such interactions emerge because of the objects' geometric features; some of these features promote entanglement, while others create repulsion. When entanglement occurs, the cluster exhibits a global, complex behaviour, which arises from the stochastic interactions between objects. We hereby refer to such a cluster as an entangled granular metamaterial. We investigate the geometrical features of the objects which make up the cluster, henceforth referred to as grains, that maximise entanglement. We hypothesise that a cluster composed of grains with a high propensity to tangle will also show a propensity to interact with a second cluster of tangled objects. To demonstrate this, we use the entangled granular metamaterials to perform complex robotic picking tasks where conventional grippers struggle. We employ an electromagnet to attract the metamaterial (ferromagnetic) and drop it onto a second cluster of objects (targets, non-ferromagnetic). When the electromagnet is re-activated, the entanglement ensures that both the metamaterial and the targets are picked, with varying degrees of physical engagement that strongly depend on geometric features. Interestingly, although the metamaterial's structural arrangement is random, it creates repeatable and consistent interactions with a second tangled medium, enabling robust picking of the latter.


Predictive Handover Strategy in 6G and Beyond: A Deep and Transfer Learning Approach

Panitsas, Ioannis, Mudvari, Akrit, Maatouk, Ali, Tassiulas, Leandros

arXiv.org Artificial Intelligence

Next-generation cellular networks will evolve into more complex and virtualized systems, employing machine learning for enhanced optimization and leveraging higher frequency bands and denser deployments to meet varied service demands. This evolution, while bringing numerous advantages, will also pose challenges, especially in mobility management, as it will increase the overall number of handovers due to smaller coverage areas and higher signal attenuation. To address these challenges, we propose a deep learning-based algorithm for predicting the future serving cell utilizing sequential user equipment measurements, in order to minimize handover failures and interruption time. Our algorithm enables network operators to dynamically adjust handover triggering events or incorporate UAV base stations for enhanced coverage and capacity, optimizing network objectives like load balancing and energy efficiency through transfer learning techniques. Our framework complies with the O-RAN specifications and can be deployed in a Near-Real-Time RAN Intelligent Controller as an xApp leveraging the E2SM-KPM service model. The evaluation results demonstrate that our algorithm achieves 92% accuracy in predicting the future serving cell with high probability. Finally, by utilizing transfer learning, our algorithm significantly reduces the retraining time, by 91% and 77% respectively, when new handover trigger decisions or UAV base stations are introduced to the network dynamically.


Autonomous and Adaptive Role Selection for Multi-robot Collaborative Area Search Based on Deep Reinforcement Learning

Zhu, Lina, Cheng, Jiyu, Zhang, Hao, Cui, Zhichao, Zhang, Wei, Liu, Yuehu

arXiv.org Artificial Intelligence

For multi-robot collaborative area search tasks, we propose a unified approach for simultaneous mapping to sense more targets (exploration) while searching for and locating targets (coverage). Specifically, we implement a hierarchical multi-agent reinforcement learning algorithm to decouple task planning from task execution. The role concept is integrated into the upper-level task planning for role selection, which enables robots to learn roles based on the state observed from the upper view. Besides, an intelligent role-switching mechanism enables the role selection module to function between two timesteps, promoting both exploration and coverage interchangeably. The primitive policy then learns how to plan for sub-task execution based on each robot's assigned role and local observation. Well-designed experiments show the scalability and generalization of our method compared with state-of-the-art approaches in scenes of varying complexity and numbers of robots.


Accurate battery lifetime prediction across diverse aging conditions with deep learning

Zhang, Han, Li, Yuqi, Zheng, Shun, Lu, Ziheng, Gui, Xiaofan, Xu, Wei, Bian, Jiang

arXiv.org Artificial Intelligence

Accurately predicting the lifetime of battery cells in early cycles holds tremendous value for battery research and development as well as numerous downstream applications. This task is rather challenging because diverse conditions, such as electrode materials, operating conditions, and working environments, collectively determine complex capacity-degradation behaviors. However, current prediction methods are developed and validated under limited aging conditions, resulting in questionable adaptability to varied aging conditions and an inability to fully benefit from historical data collected under different conditions. Here we introduce a universal deep learning approach that is capable of accommodating various aging conditions and facilitating effective learning under low-resource conditions by leveraging data from rich conditions. Our key finding is that incorporating inter-cell feature differences, rather than solely considering single-cell characteristics, significantly increases the accuracy of battery lifetime prediction and its cross-condition robustness. Accordingly, we develop a holistic learning framework accommodating both single-cell and inter-cell modeling. A comprehensive benchmark is built for evaluation, encompassing 401 battery cells utilizing 5 prevalent electrode materials across 168 cycling conditions. We demonstrate remarkable capabilities in learning across diverse aging conditions, achieving a 10% prediction error using only the first 100 cycles, and in facilitating low-resource learning, almost halving the error of single-cell modeling in many cases. More broadly, by breaking the learning boundaries among different aging conditions, our approach could significantly accelerate the development and optimization of lithium-ion batteries.
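The key finding above, that differences between cells are more informative than single-cell features alone, might be sketched as follows. The specific statistics (`mean_gap`, `var_gap`, etc.) are hypothetical stand-ins for illustration, not the paper's actual feature set.

```python
def intercell_features(cell_curve, ref_curve):
    """Build inter-cell difference features from two early-cycle capacity
    curves (lists of capacity values over the same cycles): the target
    cell and a reference cell. Summaries of the gap between the curves
    serve as model inputs instead of raw single-cell values."""
    diffs = [a - b for a, b in zip(cell_curve, ref_curve)]
    n = len(diffs)
    mean_gap = sum(diffs) / n
    var_gap = sum((d - mean_gap) ** 2 for d in diffs) / n
    return {
        "mean_gap": mean_gap,            # average offset between the cells
        "var_gap": var_gap,              # how unevenly the gap evolves
        "final_gap": diffs[-1],          # gap at the last observed cycle
        "max_gap": max(diffs, key=abs),  # largest (signed) deviation
    }
```

Intuitively, a cell that fades faster than a well-characterized reference cell under the same protocol carries a degradation signal even when its absolute capacities look unremarkable.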


A Novel Approach for Machine Learning-based Load Balancing in High-speed Train System using Nested Cross Validation

Yazici, Ibrahim, Gures, Emre

arXiv.org Artificial Intelligence

Fifth-generation (5G) mobile communication networks have recently emerged in various fields, including high-speed trains. However, the dense deployment of 5G millimeter wave (mmWave) base stations (BSs) and the high speed of moving trains lead to frequent handovers (HOs), which can adversely affect the Quality-of-Service (QoS) of mobile users. As a result, HO optimization and resource allocation are essential considerations for managing mobility in high-speed train systems. In this paper, we model the system performance of a high-speed train system with a novel machine learning (ML) approach, namely a nested cross-validation scheme that prevents information leakage from model evaluation into model parameter tuning, thereby avoiding overfitting and yielding better generalization error. To this end, we employ ML methods for the high-speed train system scenario. Handover Margin (HOM) and Time-to-Trigger (TTT) values are used as features, several KPIs are used as outputs, and several ML methods, including Gradient Boosting Regression (GBR), Adaptive Boosting (AdaBoost), CatBoost Regression (CBR), Artificial Neural Network (ANN), Kernel Ridge Regression (KRR), Support Vector Regression (SVR), and k-Nearest Neighbor Regression (KNNR), are employed for the problem. Finally, the cross-validation schemes are compared across methods in terms of mean absolute error (MAE) and mean square error (MSE). According to the obtained results, the boosting methods (AdaBoost, CBR, GBR) with the nested cross-validation scheme substantially outperform their results under the conventional cross-validation scheme. On the other hand, SVR, KNNR, KRR, and ANN with the nested scheme produce promising results for predicting some KPIs with respect to their conventional scheme counterparts.
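The nested scheme described above, an inner loop for hyperparameter tuning and an outer loop for evaluation where the outer test fold is never seen during tuning, can be sketched in plain Python. The toy k-NN regressor, the candidate `ks`, and the fold counts are illustrative assumptions, not the paper's setup.

```python
import random
from statistics import mean

def kfold_indices(n, k, seed=0):
    """Shuffle indices once, then deal them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def knn_predict(train, x, k):
    """1-D k-nearest-neighbour regression over (feature, target) pairs."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return mean(t for _, t in nearest)

def fold_mae(k, train, test):
    return mean(abs(knn_predict(train, x, k) - y) for x, y in test)

def nested_cv(data, ks=(1, 3, 5), outer=5, inner=3):
    """Outer folds estimate generalization error; inner folds pick k using
    only the outer-train split, so evaluation never leaks into tuning."""
    scores = []
    for test_idx in kfold_indices(len(data), outer):
        held_out = set(test_idx)
        train = [data[i] for i in range(len(data)) if i not in held_out]
        inner_folds = kfold_indices(len(train), inner, seed=1)

        def inner_score(k):
            errs = []
            for fold in inner_folds:
                fold_set = set(fold)
                tr = [train[i] for i in range(len(train)) if i not in fold_set]
                te = [train[i] for i in fold]
                errs.append(fold_mae(k, tr, te))
            return mean(errs)

        best_k = min(ks, key=inner_score)  # tuned without the test fold
        scores.append(fold_mae(best_k, train, [data[i] for i in test_idx]))
    return mean(scores)
```

The contrast with conventional cross-validation is that a single loop would both pick `best_k` and report the score on the same folds, which optimistically biases the reported error, exactly the leakage the nested scheme is meant to prevent.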


Computing with Action Potentials

Neural Information Processing Systems

Most computational engineering based loosely on biology uses continuous variables to represent neural activity. The engineering view is equivalent to using a rate code for representing information and for computing. An increasing number of examples are being discovered in which biology may not be using rate codes. Information can be represented using the timing of action potentials and efficiently computed within this representation. The "analog match" problem of odour identification is a simple problem which can be efficiently solved using action potential timing and an underlying rhythm. By using adapting units to effect a fundamental change of representation of a problem, we map the recognition of words (having uniform time-warp) in connected speech into the same analog match problem.


Can We Program Our Cells?

#artificialintelligence

Making living cells blink fluorescently like party lights may sound frivolous. But the demonstration that it's possible could be a step toward someday programming our body's immune cells to attack cancers more effectively and safely. That's the promise of the field called synthetic biology. While molecular biologists strip cells down to their component genes and molecules to see how they work, synthetic biologists tinker with cells to get them to perform new feats -- discovering new secrets about how life works in the process. Steve Strogatz (00:03): I'm Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in science and math today. In this episode, we're going to be talking about synthetic biology. Simply put, we could say that synthetic biology is a fusion of biology, especially molecular biology, and engineering. The distinctive thing about it is that it treats cells as programmable devices. It's a kind of tinker toy approach that builds circuits, but not out of wires and switches like we're used to, but rather out of biological components, like proteins and genes. But also, the approach holds promise for illuminating how life works at the deepest level. It's one thing to strip cells apart to see how they work. But it's another thing to tinker with cells to try to get them to perform new tricks, which is something that my guest, Michael Elowitz, does. For example, a while back, he engineered cells to blink on and off like Christmas lights. Michael Elowitz is a professor of biology and biological engineering at Caltech and the Howard Hughes Medical Institute. It's great to be here. Strogatz (01:53): So let's talk about the foundational idea of synthetic biology.
I mentioned it in the intro, that's -- that living cells, we could think of as programmable devices. The field, synthetic biology, it seems like you guys have this philosophy that you can learn about cells by building functionality into cells yourself.


Mobility Management in Emerging Ultra-Dense Cellular Networks: A Survey, Outlook, and Future Research Directions

Zaidi, Syed Muhammad Asad, Manalastas, Marvin, Farooq, Hasan, Imran, Ali

arXiv.org Artificial Intelligence

The exponential rise in mobile traffic originating from mobile devices highlights the need for making mobility management in future networks even more efficient and seamless than ever before. The Ultra-Dense Cellular Network vision, consisting of cells of varying sizes with conventional and mmWave bands, is being perceived as the panacea for the imminent capacity crunch. However, mobility challenges in an ultra-dense heterogeneous network with a motley of high-frequency and mmWave band cells will be unprecedented due to the plurality of handover instances and the resulting signaling overhead and data interruptions for a miscellany of devices. Similarly, issues like user tracking and cell discovery for mmWave with narrow beams need to be addressed before the ambitious gains of emerging mobile networks can be realized. Mobility challenges are further highlighted when considering the 5G deliverables of multi-Gbps wireless connectivity, <1 ms latency, and support for devices moving at a maximum speed of 500 km/h, to name a few. Despite its significance, few mobility surveys exist, with the majority focused on ad hoc networks. This paper is the first to provide a comprehensive survey on the panorama of mobility challenges in emerging ultra-dense mobile networks. We not only present a detailed tutorial on 5G mobility approaches and highlight key mobility risks of legacy networks, but also review key findings from recent studies and highlight the technical challenges and potential opportunities related to mobility from the perspective of emerging ultra-dense cellular networks.